151 research outputs found

    Limitations and Prospects for Diffusion-Weighted MRI of the Prostate

    Diffusion-weighted imaging (DWI) is the most effective component of the modern multi-parametric magnetic resonance imaging (mpMRI) scan for prostate pathology. DWI provides the strongest prediction of cancer volume, and the apparent diffusion coefficient (ADC) correlates moderately with Gleason grade. Notwithstanding the demonstrated cancer assessment value of DWI, the standard measurement and signal analysis methods are based on a model of water diffusion dynamics that is well known to be invalid in human tissue. This review describes the biophysical limitations of the DWI component of the current standard mpMRI protocol and the potential for significantly improved cancer assessment performance based on more sophisticated measurement and signal modeling techniques.
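
    For reference, the standard signal analysis the review critiques is typically the mono-exponential (Gaussian) decay model underlying ADC estimation. With b the diffusion weighting and S_0 the unweighted signal, that model and the resulting ADC estimate read:

```latex
S(b) = S_0 \, e^{-b\,\mathrm{ADC}}
\qquad\Longrightarrow\qquad
\mathrm{ADC} = -\frac{1}{b}\,\ln\frac{S(b)}{S_0}
```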

    Geometric models of brain white matter for microstructure imaging with diffusion MRI

    The research presented in this thesis models the diffusion-weighted MRI signal within brain white matter tissue. We are interested in deriving descriptive microstructure indices, such as white matter axon diameter and density, from the observed diffusion MRI signal. The motivation is to obtain non-invasive, reliable biomarkers for early diagnosis and prognosis of brain development and disease. We use both analytic and numerical models to investigate which properties of the tissue and aspects of the diffusion process affect the diffusion signal we measure.

    First, we develop a numerical method to approximate the tissue structure as closely as possible. We construct three-dimensional meshes from a stack of confocal microscopy images using the marching cubes algorithm. The experiment demonstrates the technique using a biological phantom (asparagus). We devise an MRI protocol to acquire data from the sample and use the mesh models as substrates in Monte-Carlo simulations to generate synthetic MRI measurements. To test the feasibility of the method, we compare simulated measurements from the three-dimensional mesh with scanner measurements from the same sample, and with simulated measurements from an extruded mesh and much simpler parametric models. The results show that the three-dimensional mesh model matches the data better than the extruded mesh and the parametric models, revealing the sensitivity of the diffusion signal to the microstructure.

    The second study constructs a taxonomy of analytic multi-compartment models of white matter by combining intra- and extra-axonal compartments from simple models. We devise an imaging protocol that allows diffusion sensitisation parallel and perpendicular to tissue fibres, and we use the protocol to acquire data from two fixed rat brains, which allows us to fit, study and evaluate the models. We conclude that models which incorporate a non-zero axon radius describe the measurements most accurately. The key observation is a departure of the signals in the parallel direction from the two-compartment models, suggesting restriction, most likely from glial cells or binding of water molecules to the membranes. The addition of a third compartment can capture this departure and explain the data.

    The final study investigates the estimates using in vivo brain diffusion measurements. We adjust the imaging protocol to allow an in vivo MRI acquisition of a rat brain, and compare and assess the taxonomy of models. We then select the models that best explain the in vivo data and compare their estimates with those from the ex vivo measurements to identify any discrepancies. The results support the addition of the third compartment, as in the ex vivo findings; however, the ranking of the models favours the zero-radius intra-axonal compartments.
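
    The Monte-Carlo step of the pipeline described above can be illustrated with a toy simulation. The sketch below is a simplifying assumption rather than the thesis's mesh-based simulator: spins random-walk inside an impermeable cylinder and a pulsed-gradient spin-echo (PGSE) signal is synthesised from the accumulated phases. All parameter values are illustrative.

```python
# Toy Monte-Carlo simulation of restricted diffusion inside an impermeable
# cylinder (a stand-in for the mesh substrates used in the thesis).
import numpy as np

GAMMA = 2.675e8  # gyromagnetic ratio of 1H (rad s^-1 T^-1)

def simulate_pgse_signal(radius=5e-6, D=2e-9, G=0.06, delta=10e-3, Delta=30e-3,
                         n_spins=20000, dt=1e-4, seed=0):
    """Normalised PGSE signal for spins restricted to a cylinder of the given
    radius (m), with a rectangular gradient pair of amplitude G (T/m) along x."""
    rng = np.random.default_rng(seed)
    # Uniform initial positions inside the cylinder cross-section (x, y).
    r = np.sqrt(rng.uniform(0, radius**2, n_spins))
    theta = rng.uniform(0, 2 * np.pi, n_spins)
    pos = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

    n_steps = int(round((Delta + delta) / dt))
    step_sd = np.sqrt(2 * D * dt)          # per-axis Gaussian step size
    phase = np.zeros(n_spins)

    for k in range(n_steps):
        t = k * dt
        # +G during [0, delta), -G during [Delta, Delta + delta), 0 otherwise
        # (the sign flip stands in for the 180-degree refocusing pulse).
        g = G if t < delta else (-G if t >= Delta else 0.0)
        phase += GAMMA * g * pos[:, 0] * dt   # phase accrued from x-position

        # Propose a diffusion step; reject steps that leave the cylinder.
        trial = pos + rng.normal(0.0, step_sd, pos.shape)
        inside = np.einsum('ij,ij->i', trial, trial) <= radius**2
        pos[inside] = trial[inside]

    # Signal is the magnitude of the ensemble-averaged transverse magnetisation.
    return np.abs(np.mean(np.exp(1j * phase)))

if __name__ == "__main__":
    print("E =", simulate_pgse_signal())
```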

    Unsupervised Domain Adaptation with Semantic Consistency across Heterogeneous Modalities for MRI Prostate Lesion Segmentation

    Any novel medical imaging modality that differs from previous protocols, e.g. in the number of imaging channels, introduces a new domain that is heterogeneous with respect to previous ones. This common medical imaging scenario is rarely considered in the domain adaptation literature, which typically handles shifts across domains of the same dimensionality. In our work we rely on stochastic generative modeling to translate between two heterogeneous domains at the pixel level and introduce two new loss functions that promote semantic consistency. Firstly, we introduce a semantic cycle-consistency loss in the source domain to ensure that the translation preserves the semantics. Secondly, we introduce a pseudo-labelling loss: we translate target data to the source domain, label them with a source-domain network, and use the generated pseudo-labels to supervise the target-domain network. Our results show that this allows us to extract systematically better representations for the target domain. In particular, we address the challenge of enhancing performance on VERDICT-MRI, an advanced diffusion-weighted imaging technique, by exploiting labeled mp-MRI data. When compared to several unsupervised domain adaptation approaches, our approach yields substantial improvements that consistently carry over to the semi-supervised and supervised learning settings.
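
    The pseudo-labelling loss described above can be sketched in a few lines of PyTorch. The module names (G_t2s, seg_source, seg_target) are placeholders for the target-to-source generator and the two segmentation networks, not the paper's actual code.

```python
# Sketch of the pseudo-labelling loss: translate target images to the source
# domain, label them with a frozen source-domain segmenter, and use those
# pseudo-labels to supervise the target-domain segmenter.
import torch
import torch.nn.functional as F

def pseudo_label_loss(x_target, G_t2s, seg_source, seg_target):
    """x_target: batch of target-domain images, shape (N, C_target, H, W)."""
    with torch.no_grad():
        x_fake_source = G_t2s(x_target)               # target -> source translation
        pseudo = seg_source(x_fake_source).argmax(1)  # hard pseudo-labels (N, H, W)
    logits = seg_target(x_target)                      # target-domain predictions
    return F.cross_entropy(logits, pseudo)
```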

    Harnessing uncertainty in domain adaptation for MRI prostate lesion segmentation

    The need for training data can impede the adoption of novel imaging modalities for learning-based medical image analysis. Domain adaptation methods partially mitigate this problem by translating training data from a related source domain to a novel target domain, but typically assume that a one-to-one translation is possible. Our work addresses the challenge of adapting to a more informative target domain, where multiple target samples can emerge from a single source sample. In particular, we consider translating from mp-MRI to VERDICT, a richer MRI modality involving an optimized acquisition protocol for cancer characterization. We explicitly account for the inherent uncertainty of this mapping and exploit it to generate multiple outputs conditioned on a single input. Our results show that this allows us to extract systematically better image representations for the target domain, when used in tandem with both simple CycleGAN-based baselines and more powerful approaches that integrate discriminative segmentation losses and/or residual adapters. When compared to its deterministic counterparts, our approach yields substantial improvements across a broad range of dataset sizes, increasingly strong baselines, and evaluation measures.
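
    A minimal sketch of the one-to-many translation idea, under the assumption of a noise-conditioned image-to-image generator (G below is a placeholder, not the paper's implementation): sampling a fresh latent code per draw yields several target-domain candidates from a single source image.

```python
# Draw several stochastic translations of each source image by re-sampling a
# latent code for a generator conditioned on (image, code).
import torch

def sample_translations(x_source, G, n_samples=4, z_dim=8):
    """Return n_samples translations of each image in x_source (N, C, H, W)."""
    outs = []
    for _ in range(n_samples):
        z = torch.randn(x_source.size(0), z_dim, device=x_source.device)
        outs.append(G(x_source, z))      # one stochastic translation per draw
    return torch.stack(outs, dim=1)       # shape (N, n_samples, C, H, W)
```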

    Synthesizing VERDICT maps from standard DWI data using GANs

    VERDICT maps have shown promising results in clinical settings, discriminating normal from malignant tissue and identifying specific Gleason grades non-invasively. However, the quantitative estimation of VERDICT maps requires a specific diffusion-weighted imaging (DWI) acquisition. In this study we investigate the feasibility of synthesizing VERDICT maps from standard DWI data from multi-parametric (mp)-MRI by employing conditional generative adversarial networks (GANs). We use data from 67 patients who underwent both standard DWI-MRI and VERDICT MRI, and rely on correlation analysis and mean squared error to quantitatively evaluate the quality of the synthetic VERDICT maps. Quantitative results show that the mean values of tumour areas in the synthetic and the real VERDICT maps were strongly correlated, while qualitative results indicate that our method can generate realistic VERDICT maps that could supplement mp-MRI assessment for better diagnosis.
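
    The quantitative evaluation described above (correlation and mean squared error between mean tumour-region values of real and synthetic maps) could be computed along the following lines; the array names and ROI-mask convention are illustrative assumptions, not the study's code.

```python
# Compare per-patient mean tumour-ROI values of real vs. synthetic VERDICT maps
# via Pearson correlation and mean squared error.
import numpy as np
from scipy.stats import pearsonr

def evaluate_maps(real_maps, synth_maps, tumour_masks):
    """real_maps, synth_maps: lists of arrays (one map per patient);
    tumour_masks: matching boolean arrays marking the tumour ROI."""
    real_means = np.array([m[roi].mean() for m, roi in zip(real_maps, tumour_masks)])
    synth_means = np.array([m[roi].mean() for m, roi in zip(synth_maps, tumour_masks)])
    r, p = pearsonr(real_means, synth_means)           # correlation across patients
    mse = float(np.mean((real_means - synth_means) ** 2))
    return r, p, mse
```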

    Learning to Downsample for Segmentation of Ultra-High Resolution Images

    Many computer vision systems require low-cost segmentation algorithms based on deep learning, either because of the enormous size of input images or a limited computational budget. Common solutions uniformly downsample the input images to meet memory constraints, assuming all pixels are equally informative. In this work, we demonstrate that this assumption can harm segmentation performance because the segmentation difficulty varies spatially (see Figure 1, “Uniform”). We combat this problem by introducing a learnable downsampling module, which can be optimised together with the given segmentation model in an end-to-end fashion. We formulate the training of such a downsampling module as optimisation of sampling density distributions over the input images, given their low-resolution views. To defend against degenerate solutions (e.g. over-sampling trivial regions such as the background), we propose a regularisation term that encourages the sampling locations to concentrate around object boundaries. We find that the downsampling module learns to sample more densely at difficult locations, thereby improving the segmentation performance (see Figure 1, “Ours”). Our experiments on benchmarks of high-resolution street-view, aerial and medical images demonstrate substantial improvements in the efficiency-and-accuracy trade-off compared to both uniform downsampling and two recent advanced downsampling techniques.
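
    One plausible way to realise such a learnable downsampler, sketched below as an assumption about the mechanism rather than the paper's exact module, is to predict offsets that warp a uniform sampling grid from a cheap low-resolution view and then resample the full image with the warped grid; the boundary-concentration regulariser would then act on the resulting sampling locations.

```python
# Illustrative non-uniform downsampler: a small CNN predicts offsets that warp
# a uniform grid, and grid_sample resamples the full-resolution image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableDownsampler(nn.Module):
    def __init__(self, in_ch=3, out_size=(256, 256)):
        super().__init__()
        self.out_size = out_size
        self.offset_net = nn.Sequential(              # predicts 2-channel offsets
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1), nn.Tanh())

    def forward(self, x):
        h, w = self.out_size
        # Uniform base grid in [-1, 1] x [-1, 1], shape (1, h, w, 2), (x, y) order.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=x.device),
                                torch.linspace(-1, 1, w, device=x.device),
                                indexing="ij")
        base = torch.stack([xs, ys], dim=-1).unsqueeze(0)
        # Predict small offsets from a cheap low-resolution view of the input.
        low = F.interpolate(x, size=self.out_size, mode="bilinear",
                            align_corners=False)
        offsets = self.offset_net(low).permute(0, 2, 3, 1) * 0.05
        grid = (base + offsets).clamp(-1, 1)
        # Non-uniformly downsampled image, plus the grid (e.g. for a boundary
        # regulariser on the sampling locations).
        return F.grid_sample(x, grid, align_corners=False), grid
```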

    Estimation of the vascular fraction in brain tumors by VERDICT correlated with Perfusion MRI
